Lagrangian multiplier : English Wikipedia
Lagrange multiplier

In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange; see his ''Mécanique analytique'', sect. IV, 2 vols., Paris, 1811, https://archive.org/details/mcaniqueanalyt01lagr) is a strategy for finding the local maxima and minima of a function subject to equality constraints.
For instance (see Figure 1), consider the optimization problem
:maximize f(x, y)
:subject to g(x, y) = 0.
We need both f and g to have continuous first partial derivatives. We introduce a new variable (\lambda) called a Lagrange multiplier and study the Lagrange function (or Lagrangian) defined by
: \mathcal{L}(x,y,\lambda) = f(x,y) + \lambda \cdot g(x,y),
where the \lambda term may be either added or subtracted. If f(x_0, y_0) is a maximum of f(x, y) for the original constrained problem, then there exists \lambda_0 such that (x_0, y_0, \lambda_0) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of \mathcal{L} are zero). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist.
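The stationarity condition can be carried out concretely. The following sketch uses a hypothetical worked example that is not from this article: maximize f(x, y) = x + y subject to g(x, y) = x² + y² − 1 = 0. Setting the three partial derivatives of the Lagrangian to zero gives a system of equations, solved here with Newton's method and a hand-rolled 3×3 linear solver.

```python
# Hypothetical example (not from the article): maximize f(x, y) = x + y
# subject to g(x, y) = x^2 + y^2 - 1 = 0.  The condition grad L = 0 gives
# three equations in (x, y, lambda), solved by Newton's method.

def grad_L(x, y, lam):
    """Gradient of L = f + lam * g for f = x + y, g = x^2 + y^2 - 1."""
    return [1 + 2 * lam * x,      # dL/dx
            1 + 2 * lam * y,      # dL/dy
            x * x + y * y - 1]    # dL/dlam = g(x, y)

def jacobian(x, y, lam):
    """Jacobian of grad_L with respect to (x, y, lambda)."""
    return [[2 * lam, 0.0, 2 * x],
            [0.0, 2 * lam, 2 * y],
            [2 * x, 2 * y, 0.0]]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

# Newton iteration from a rough initial guess near the maximum.
x, y, lam = 1.0, 1.0, -1.0
for _ in range(20):
    step = solve3(jacobian(x, y, lam), [-v for v in grad_L(x, y, lam)])
    x, y, lam = x + step[0], y + step[1], lam + step[2]

print(x, y, lam)  # converges to (1/sqrt(2), 1/sqrt(2), -1/sqrt(2))
```

The converged point (1/√2, 1/√2) is the constrained maximum on the unit circle, and the third component is the multiplier that makes the gradients of f and g parallel there.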
==Introduction==
One of the most common problems in calculus is that of finding maxima or minima (in general, "extrema") of a function, but it is often difficult to find a closed form for the function being extremized. Such difficulties often arise when one wishes to maximize or minimize a function subject to fixed outside conditions or constraints. The method of Lagrange multipliers is a powerful tool for solving this class of problems without the need to explicitly solve the conditions and use them to eliminate extra variables.
Consider the two-dimensional problem introduced above:
:maximize f(x, y)
:subject to g(x, y) = 0.
The method of Lagrange multipliers relies on the intuition that at a maximum, f cannot be increasing in the direction of any neighboring point where g = 0. If it were, we could walk along g = 0 to get higher, meaning that the starting point wasn't actually the maximum.
We can visualize contours of f given by f(x, y) = d for various values of d, and the contour of g given by g(x, y) = 0.
Suppose we walk along the contour line with g = 0. We are interested in finding points where f does not change as we walk, since these points might be maxima. There are two ways this could happen: First, we could be following a contour line of f, since by definition f does not change as we walk along its contour lines. This would mean that the contour lines of f and g are parallel here. The second possibility is that we have reached a "level" part of f, meaning that f does not change in any direction.
To check the first possibility, notice that since the gradient of a function is perpendicular to its contour lines, the contour lines of f and g are parallel if and only if the gradients of f and g are parallel. Thus we want points (x, y) where g(x, y) = 0 and
: \nabla_{x,y} f = - \lambda \nabla_{x,y} g,
for some \lambda,
where
: \nabla_{x,y} f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right), \qquad \nabla_{x,y} g = \left( \frac{\partial g}{\partial x}, \frac{\partial g}{\partial y} \right)
are the respective gradients. The constant \lambda is required because although the two gradient vectors are parallel, their magnitudes are generally not equal. (The negative sign is traditional.) This constant is called the Lagrange multiplier.
Notice that this method also solves the second possibility: if f is level, then its gradient is zero, and setting \lambda = 0 is a solution regardless of \nabla_{x,y} g.
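The parallel-gradients condition can be checked numerically. Below is a minimal sketch using an assumed example that is not from this article, f(x, y) = x + y on the unit circle g(x, y) = x² + y² − 1 = 0, whose constrained maximum is at x = y = 1/√2 with multiplier λ = −1/√2.

```python
import math

# Hypothetical example: f = x + y, g = x^2 + y^2 - 1.  At the constrained
# maximum the gradients are parallel, with grad f = -lambda * grad g.

x = y = 1 / math.sqrt(2)     # constrained maximizer on the unit circle
lam = -1 / math.sqrt(2)      # its Lagrange multiplier

grad_f = (1.0, 1.0)          # gradient of f(x, y) = x + y
grad_g = (2 * x, 2 * y)      # gradient of g(x, y) = x^2 + y^2 - 1

residual = [gf + lam * gg for gf, gg in zip(grad_f, grad_g)]
print(residual)  # both components vanish: grad f + lambda * grad g = 0
```

Here grad_g = (√2, √2) has a different magnitude than grad_f = (1, 1); the multiplier is exactly the scale factor that makes the two parallel vectors cancel.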
To incorporate these conditions into one equation, we introduce an auxiliary function
: \mathcal{L}(x,y,\lambda) = f(x,y) + \lambda \cdot g(x,y),
and solve
: \nabla_{x,y,\lambda} \mathcal{L}(x, y, \lambda) = 0.
This is the method of Lagrange multipliers. Note that \nabla_{\lambda} \mathcal{L}(x, y, \lambda) = 0 implies g(x, y) = 0.
The constrained extrema of f are ''critical points'' of the Lagrangian \mathcal{L}, but they are not necessarily ''local extrema'' of \mathcal{L} (see Example 2 below).
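This saddle behavior is easy to exhibit numerically. The sketch below uses an assumed example not taken from this article (f = x + y, g = x² + y² − 1): at the constrained maximizer, the Lagrangian decreases along one perturbation direction and increases along another, so the stationary point is a saddle of \mathcal{L}, not a local extremum.

```python
import math

# Hypothetical example: f = x + y, g = x^2 + y^2 - 1.  The constrained
# maximizer (x0, y0, lam0) is a stationary point of L but a saddle:
# L moves below its value there in one direction and above it in another.

def L(x, y, lam):
    return x + y + lam * (x * x + y * y - 1)

x0 = y0 = 1 / math.sqrt(2)
lam0 = -1 / math.sqrt(2)
eps = 1e-2

base = L(x0, y0, lam0)
down = L(x0 + eps, y0, lam0)                  # perturb x only: L drops
up = L(x0 + eps, y0 + eps, lam0 + 2 * eps)    # mixed direction: L rises

print(down < base < up)  # True: a saddle point of L
```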
One may reformulate the Lagrangian as a Hamiltonian, in which case the solutions are local minima for the Hamiltonian. This is done in optimal control theory, in the form of Pontryagin's minimum principle.
The fact that solutions of the Lagrangian are not necessarily extrema also poses difficulties for numerical optimization. This can be addressed by computing the ''magnitude'' of the gradient, as the zeros of the magnitude are necessarily local minima, as illustrated in the numerical optimization example.
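The idea in the preceding paragraph can be sketched in code. This is a minimal illustration under the same assumed example problem used above (f = x + y subject to g = x² + y² − 1 = 0, not from this article): rather than seeking a saddle of \mathcal{L} directly, run plain gradient descent on the squared magnitude of \nabla\mathcal{L}, whose zeros are exactly the stationary points and are local minima of the magnitude.

```python
# Hypothetical example: f = x + y, g = x^2 + y^2 - 1.  Minimize
# m = |grad L|^2 by gradient descent; m = 0 exactly at stationary
# points of L, so those points are (global) minima of m.

def grad_L(x, y, lam):
    return (1 + 2 * lam * x, 1 + 2 * lam * y, x * x + y * y - 1)

def grad_m(x, y, lam):
    """Gradient of m = |grad L|^2 with respect to (x, y, lambda), by hand."""
    a, b, g = grad_L(x, y, lam)
    return (4 * lam * a + 4 * x * g,
            4 * lam * b + 4 * y * g,
            4 * x * a + 4 * y * b)

# Plain gradient descent on m from a rough starting guess.
x, y, lam = 0.8, 0.7, -0.6
lr = 0.02
for _ in range(20000):
    dx, dy, dlam = grad_m(x, y, lam)
    x, y, lam = x - lr * dx, y - lr * dy, lam - lr * dlam

m = sum(v * v for v in grad_L(x, y, lam))
print(x, y, lam, m)  # near (0.7071, 0.7071, -0.7071), with m ~ 0
```

A production solver would use Newton-type steps or a dedicated constrained-optimization routine instead of fixed-step descent; the point here is only that minimizing the gradient magnitude turns a saddle-finding problem into a minimization problem.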

Source: Wikipedia, the free encyclopedia.
Read the full text of the "Lagrange multiplier" article at Wikipedia.


